Global and local distance-based generalized linear models
Authors
Abstract
Similar resources
Global Coordination of Local Linear Models
High dimensional data that lies on or near a low dimensional manifold can be described by a collection of local linear models. Such a description, however, does not provide a global parameterization of the manifold—arguably an important goal of unsupervised learning. In this paper, we show how to learn a collection of local linear models that solves this more difficult problem. Our local linear...
Local linear regression for generalized linear models with missing data
Fan, Heckman and Wand (1995) proposed locally weighted kernel polynomial regression methods for generalized linear models and quasilikelihood functions. When the covariate variables are missing at random, we propose a weighted estimator based on the inverse selection probability weights. Distribution theory is derived when the selection probabilities are estimated nonparametrically. We show tha...
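The inverse-selection-probability idea in this abstract can be illustrated on a toy problem. The sketch below is not the authors' estimator: it uses a plain weighted least-squares fit in place of their locally weighted GLM, and all variable names and the Gaussian-kernel smoother for the selection probabilities are illustrative assumptions. The point it shows is the core mechanism: estimate the probability that a covariate is observed (nonparametrically, from always-observed data), then weight each complete case by the inverse of that probability.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)  # linear model, for simplicity

# Covariate x is observed only when r == 1; selection depends on the
# always-observed response y (missing at random given y).
p_true = 1.0 / (1.0 + np.exp(-(0.5 + y)))
r = rng.binomial(1, p_true)

def kernel_prob(y0, y, r, h=0.5):
    """Nonparametric (Gaussian-kernel) estimate of P(observed | y = y0)."""
    w = np.exp(-0.5 * ((y - y0) / h) ** 2)
    return np.sum(w * r) / np.sum(w)

pi_hat = np.array([kernel_prob(yi, y, r) for yi in y])

# Inverse-probability-weighted least squares on the complete cases:
# each observed pair is up-weighted by 1 / pi_hat to represent the
# similar cases that went missing.
obs = r == 1
w = 1.0 / pi_hat[obs]
X = np.column_stack([np.ones(obs.sum()), x[obs]])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y[obs]))
```

Because selection here depends on `y`, a naive complete-case fit is biased; the inverse-probability weights approximately remove that bias, so `beta` lands near the true coefficients `(1, 2)`.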
Local Linear Functional Regression based on Weighted Distance-Based Regression
We consider the problem of nonparametrically predicting a scalar response variable y from a functional predictor χ. We have n observations (χi, yi) and we assign a weight wi ∝ K (d(χ, χi)/h) to each χi, where d( · , · ) is a semi-metric, K is a kernel function and h is the bandwidth. Then we fit a Weighted (Linear) Distance-Based Regression, where the weights are as above and the distances are ...
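The recipe in this abstract — kernel weights w_i ∝ K(d(χ, χ_i)/h) from a semi-metric d, fed into a distance-based linear regression — can be sketched end to end on toy curves. Everything below is an illustrative assumption, not the paper's implementation: the semi-metric is a discretized L2 distance, the distance-based step uses classical multidimensional scaling to turn distances into Euclidean coordinates, and the local fit is ordinary weighted least squares in those coordinates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy functional data: n curves observed on a common grid of m points.
n, m = 60, 50
t = np.linspace(0.0, 1.0, m)
a = rng.uniform(-1.0, 1.0, size=n)
curves = a[:, None] * np.sin(2 * np.pi * t)[None, :]
y = 3.0 * a + rng.normal(scale=0.1, size=n)  # scalar response

def d(u, v):
    """Semi-metric: discretized L2 distance between two curves."""
    return np.sqrt(np.sum((u - v) ** 2) * (t[1] - t[0]))

def predict(chi_new, h=0.3, k=2):
    all_curves = np.vstack([curves, chi_new[None, :]])
    N = n + 1
    D = np.array([[d(u, v) for v in all_curves] for u in all_curves])

    # Classical MDS: double-center the squared distances and take the
    # top-k eigenvectors as Euclidean coordinates.
    J = np.eye(N) - np.ones((N, N)) / N
    B = -0.5 * J @ (D ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]
    Z = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

    # Kernel weights w_i ∝ K(d(chi_new, chi_i) / h), Gaussian K.
    w = np.exp(-0.5 * (D[-1, :n] / h) ** 2)

    # Weighted linear regression in the MDS coordinates.
    X = np.column_stack([np.ones(n), Z[:n]])
    beta = np.linalg.lstsq(X.T @ (w[:, None] * X), X.T @ (w * y), rcond=None)[0]
    return np.array([1.0, *Z[-1]]) @ beta

pred = predict(0.5 * np.sin(2 * np.pi * t))  # true response would be 1.5
```

Since the toy curves vary along a single direction, the MDS coordinates essentially recover the scalar `a`, and the locally weighted fit predicts close to the true value 3 × 0.5 = 1.5.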
Monte Carlo Local Likelihood for Estimating Generalized Linear Mixed Models
We propose the Monte Carlo local likelihood (MCLL) method for estimating generalized linear mixed models (GLMMs) with crossed random effects. MCLL initially treats model parameters as random variables, sampling them from the posterior distribution in a Bayesian model. The likelihood function is then approximated up to a constant by fitting a density to the posterior samples and dividing it by the ...
LinXGBoost: Extension of XGBoost to Generalized Local Linear Models
XGBoost is often presented as the algorithm that wins every ML competition. Surprisingly, this is true even though predictions are piecewise constant. This might be justified in high dimensional input spaces, but when the number of features is low, a piecewise linear model is likely to perform better. XGBoost was extended into LinXGBoost that stores at each leaf a linear model. This extension, ...
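The "linear model at each leaf" idea can be shown in miniature without any boosting machinery. The sketch below is a hypothetical stand-in, not LinXGBoost: a single one-level tree on one feature, where the split is chosen to minimize the summed squared error of a separate linear fit in each of the two leaves. All names and the candidate-split grid are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-2.0, 2.0, size=200)
# Piecewise linear target with a kink at 0, plus small noise.
y = np.where(x < 0, 1.0 + 2.0 * x, 1.0 - 0.5 * x) + rng.normal(scale=0.05, size=200)

def fit_leaf(xs, ys):
    """Ordinary least-squares line (intercept, slope) for one leaf."""
    X = np.column_stack([np.ones_like(xs), xs])
    return np.linalg.lstsq(X, ys, rcond=None)[0]

def sse(xs, ys):
    b = fit_leaf(xs, ys)
    resid = ys - (b[0] + b[1] * xs)
    return resid @ resid

# One-level "tree": pick the split that minimizes the summed SSE of a
# linear model fitted in each of the two resulting leaves.
candidates = np.quantile(x, np.linspace(0.1, 0.9, 17))
best = min(candidates, key=lambda s: sse(x[x < s], y[x < s]) + sse(x[x >= s], y[x >= s]))
left, right = fit_leaf(x[x < best], y[x < best]), fit_leaf(x[x >= best], y[x >= best])

def predict(x0):
    b = left if x0 < best else right
    return b[0] + b[1] * x0
```

A constant-leaf tree would need many splits to track the two slopes; with a linear model per leaf, one split near the kink at 0 recovers the target almost exactly — the advantage the abstract claims for low-dimensional inputs.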
Journal
Journal title: TEST
Year: 2015
ISSN: 1133-0686,1863-8260
DOI: 10.1007/s11749-015-0447-1